perm filename SEARLE[W90,JMC]3 blob sn#884695 filedate 1990-05-28 generic text, type C, neo UTF8
%searle[w90,jmc]		Notes for Chinese Room symposium
harnad@clarity.princeton.edu
Abstract for Searle symposium
John Searle begins his (1990) article
``Consciousness, Explanatory Inversion and Cognitive Science''
with:

	     ``Ten years ago in this journal I published an
     article (Searle, 1980a and 1980b) criticising what I
     call Strong AI, the view that for a system to have
     mental states it is sufficient for the system to
     implement the right sort of program with right inputs
     and outputs.  Strong AI is rather easy to refute and
     the basic argument can be summarized in one sentence: {\it a
     system, me for example, could implement a program for
     understanding Chinese, for example, without
     understanding any Chinese at all.}  This idea, when
     developed, became known as the Chinese Room Argument.''

The Chinese Room Argument can be refuted in one sentence:

{\it Searle confuses the mental qualities of one computational
process, himself for example, with those of another process that
the first process might be interpreting, a process that
understands Chinese, for example.}
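The distinction between an interpreting process and the process it interprets can be made concrete with a minimal sketch (hypothetical names and rulebook, chosen only for illustration): a host loop that mechanically matches symbol strings against a rulebook. Whatever capacity the virtual process has belongs to the rulebook-plus-execution as a whole, not to the host loop, which only shuffles symbols.

```python
# Minimal sketch (hypothetical example): a host process interpreting a
# second, rule-driven process.  The interpreted process has a capacity
# (here, answering simple questions) that the host interpreter lacks --
# the host only performs symbol lookup.

RULEBOOK = {  # the interpreted process, expressed as symbol-to-symbol rules
    "does a cat have four legs?": "yes",
    "does a fish have legs?": "no",
}

def host_interpreter(rulebook, query):
    """Mechanically match an input string against the rulebook.

    Nothing here depends on the meaning of the symbols; the host could
    run this rulebook without knowing what any entry is about.
    """
    return rulebook.get(query, "unknown")

print(host_interpreter(RULEBOOK, "does a fish have legs?"))  # -> no
```

The host function is the same whatever rulebook it runs; the question-answering behavior is a property of the interpreted program, which is the point of the one-sentence refutation above.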

	That accomplished, the lecture will discuss the
ascription of mental qualities to machines, with special
attention to the relation between syntax and semantics,
i.e., the questions suggested by the Chinese Room
Argument.  I will deal explicitly with Searle's four ``axioms'',
which, although they don't have a unique interpretation, suggest
various ideas worth discussing.

The intuition behind the opinion that computation can't be thinking
rests on simple programs: Eliza isn't thinking, and it isn't even on
the road to thinking.
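Why Eliza isn't on the road to thinking is easy to see from its mechanism. A minimal Eliza-style sketch (hypothetical rules, not Weizenbaum's actual script) is keyword matching plus template substitution, with no representation of what anything means:

```python
import re

# Minimal Eliza-style sketch (hypothetical rules for illustration):
# match a keyword pattern, splice the user's own words into a canned
# template, and fall back to a stock phrase otherwise.  There is no
# model of meaning anywhere in the program.

PATTERNS = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
]

def eliza_reply(sentence):
    """Return a canned response built by pattern matching alone."""
    for pattern, template in PATTERNS:
        m = pattern.match(sentence)
        if m:
            return template.format(m.group(1))
    return "Please go on."

print(eliza_reply("I feel trapped"))  # -> Tell me more about feeling trapped.
```

Nothing in such a program is even a candidate bearer of understanding, which is what distinguishes it from the declarative-information-heavy program the Chinese Room would actually require.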

The algorithm for a successful Chinese room involves
a lot of declarative information.